Upcoming Event: Center for Autonomy Seminar
Yorie Nakahira, Assistant Professor, Department of Electrical and Computer Engineering at Carnegie Mellon University
11:00 AM – 12:30 PM
Wednesday, March 4, 2026
POB 6.304
Autonomous systems must operate safely in uncertain, interactive, and nonstationary environments alongside humans. To enhance their capabilities, we are developing techniques for stochastic safe control, uncertainty and risk quantification, adaptation, and language-guided control. In this talk, we begin by quantifying and assuring safety from data in the presence of latent risks. Many systems contain unobservable variables that render the system dynamics partially unidentifiable or induce distribution shifts between offline and online statistics, even when the underlying dynamics remain unchanged. Such “spurious” distribution shifts often break standard approaches to risk quantification and stochastic safe control. To overcome this, we propose a framework for designing data-driven safety certificates for systems with latent risks. On the inference side, the framework employs physics-informed learning/reinforcement learning (RL) to estimate long-term risk from data in which risk events are scarce, while exploiting structural properties such as low-dimensional risk representations and graph decompositions in multi-agent systems. On the control side, it builds on a new notion of invariance, termed probabilistic invariance, which allows safety conditions to be constructed from data despite spurious distribution shifts and partially unidentifiable dynamics.

Next, we introduce our work toward achieving lifelong safety in systems with self-seeking humans or adaptive opponents. Through modeling and experiments with human subjects, we find that worst-case control can inadvertently induce adversarial opponent adaptation that increases risk in future interactions. This observation mirrors the empirical literature on social dilemmas and human risk compensation, yet its implications for lifelong risk in nonstationary interactions have rarely been investigated. It also suggests an underexplored potential to proactively shape desirable opponent adaptations for enhanced performance and safety.

Finally, we will present our ongoing work on uncertainty quantification in neural networks, sequential fine-tuning of Bayesian transformers, and language-guided control. At the core of our approach are analytic solutions for the moments of random variables passed through nonlinear activation functions. These solutions enable a moment propagation method that tracks mean vectors and covariance matrices across networks, providing sample-free tools for uncertainty quantification and robustness analysis. Building on this, we introduce a double-Bayesian framework for sequential transformer fine-tuning. The framework reformulates fine-tuning as posterior inference across layers and time (data), which reduces to a one-pass propagation of analytic formulas and eliminates the need for iterative gradient computation. These techniques are particularly useful for language-guided control, where uncertainty estimates are needed for robust decision-making.
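To make the kind of quantity involved in data-driven safety certification concrete, the sketch below gives a Monte Carlo estimate of a long-term safety probability: the probability that a toy controlled random walk stays inside a safe set over a finite horizon. This is a minimal illustration of the target quantity only; the dynamics, safe set, and controller are hypothetical and do not represent the speaker's framework.

```python
import numpy as np

def safety_probability(x0, controller, horizon=50, n_rollouts=20_000,
                       safe_low=-1.0, safe_high=1.0, noise_std=0.1, seed=0):
    """Monte Carlo estimate of Pr[x_t in [safe_low, safe_high] for all t <= horizon]."""
    rng = np.random.default_rng(seed)
    x = np.full(n_rollouts, x0, dtype=float)
    safe = np.ones(n_rollouts, dtype=bool)
    for _ in range(horizon):
        x = x + controller(x) + rng.normal(0.0, noise_std, n_rollouts)
        # A rollout that ever leaves the safe set stays flagged as unsafe.
        safe &= (x >= safe_low) & (x <= safe_high)
    return safe.mean()

# Hypothetical proportional controller pulling the state toward the origin.
p = safety_probability(x0=0.5, controller=lambda x: -0.3 * x)
print(f"Estimated probability of staying safe over the horizon: {p:.3f}")
```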
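As a concrete illustration of moment propagation through a nonlinear activation, the sketch below uses the standard closed-form mean and variance of a Gaussian variable passed through a ReLU and checks them against sampling. It is a simplified stand-in for the sample-free tools described above, assuming a Gaussian input and a single scalar activation; it is not the speaker's implementation.

```python
import numpy as np
from scipy.stats import norm

def relu_moments(mu, sigma):
    """Mean and variance of max(X, 0) for X ~ N(mu, sigma^2); standard closed forms."""
    alpha = mu / sigma
    mean = mu * norm.cdf(alpha) + sigma * norm.pdf(alpha)
    second_moment = (mu**2 + sigma**2) * norm.cdf(alpha) + mu * sigma * norm.pdf(alpha)
    return mean, second_moment - mean**2

# Sanity check against sampling for hypothetical input statistics.
mu, sigma = 0.3, 1.2
m, v = relu_moments(mu, sigma)
samples = np.maximum(np.random.default_rng(0).normal(mu, sigma, 1_000_000), 0.0)
print(f"analytic    mean={m:.4f}  var={v:.4f}")
print(f"Monte Carlo mean={samples.mean():.4f}  var={samples.var():.4f}")
```

In a multi-layer setting, moments of this kind would be propagated layer by layer, through linear maps (which transform means and covariances exactly) and through activations via such closed forms, which is the idea behind tracking mean vectors and covariance matrices across a network without sampling.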
Yorie Nakahira is an Assistant Professor in the Department of Electrical and Computer Engineering at Carnegie Mellon University. She received a B.E. in Control and Systems Engineering from Tokyo Institute of Technology and a Ph.D. in Control and Dynamical Systems from California Institute of Technology. Her research goal is to develop control and learning techniques that enhance the capabilities of autonomous systems. On the algorithm side, her group studies robust and safe control, uncertainty and risk quantification, adaptation algorithms, and language-guided control. On the application side, her group explores diverse topics, ranging from autonomous control systems to human sensorimotor control to poverty alleviation policy design. She has received four young investigator awards, including the NSF CAREER Award, and holds a part-time position at the Research and Development Center for Large Language Models.